
    A common scheme for running NLO ep event generators

    In this article we present a generic interface to several next-to-leading-order cross-section programs. This enables the user to implement their code once and make cross-checks with different programs.
    Comment: 19 pages, Proceedings of the Workshop on Monte Carlo Generators for HERA Physics 1998/99
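
    A minimal sketch of what such a common interface could look like, assuming a hypothetical Python wrapper; the names below (NLOProgram, generate_event, UserAnalysis) are illustrative and not taken from the proceedings:

        from abc import ABC, abstractmethod

        class NLOProgram(ABC):
            """Hypothetical common wrapper around an NLO ep cross-section program."""

            @abstractmethod
            def generate_event(self):
                """Return a list of (weight, four_momenta) contributions for one event."""

        class UserAnalysis:
            """User code written once; runs unchanged against any NLOProgram backend."""

            def __init__(self):
                self.sigma = 0.0

            def analyse(self, weight, momenta):
                # Fill histograms, accumulate the cross section, etc.
                self.sigma += weight

        def run(program: NLOProgram, analysis: UserAnalysis, n_events: int):
            # The same user analysis can be cross-checked against different backends.
            for _ in range(n_events):
                for weight, momenta in program.generate_event():
                    analysis.analyse(weight, momenta)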

    Event Shapes and Power Corrections at HERA

    A measurement of event-shape variables in neutral current deep inelastic ep scattering has been made at HERA with the ZEUS detector, using an integrated luminosity of 45.2 pb^-1. The variables thrust and broadening, with respect to both the photon axis and the thrust axis, as well as the jet mass and the C-parameter, have been measured in the current region of the Breit frame in the kinematic range 10 < Q^2 < 20480 GeV^2 and 0.0006 < x < 0.6. The Q dependence of the event shapes has been compared to QCD predictions using next-to-leading-order calculations in conjunction with a power-correction model to account for hadronisation. The model is tested by extracting the strong coupling constant alpha_s(M_Z) and a new non-perturbative parameter, alpha_0(mu_I).
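
    The power-correction model referred to here is, schematically, an additive 1/Q shift of the perturbative mean value; a sketch in LaTeX of the standard form, with the precise coefficients (Milan factor, colour factors) taken from the power-correction literature rather than from this abstract:

        \langle F \rangle = \langle F \rangle_{\mathrm{pert}} + a_F\,\mathcal{P},
        \qquad
        \mathcal{P} \simeq \frac{4 C_F}{\pi^2}\,\mathcal{M}\,\frac{\mu_I}{Q}
        \left[ \alpha_0(\mu_I) - \alpha_s(Q)
        - \frac{\beta_0}{2\pi}\left( \ln\frac{Q}{\mu_I} + \frac{K}{\beta_0} + 1 \right)\alpha_s^2(Q) \right]

    Here a_F is an observable-dependent coefficient, so fitting the Q dependence of several event shapes yields both alpha_s(M_Z) and alpha_0(mu_I).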

    Next-Generation EU DataGrid Data Management Services

    We describe the architecture and initial implementation of the next generation of Grid Data Management middleware in the EU DataGrid (EDG) project. The new architecture stems from our experience and the user requirements gathered during the two years of running our initial set of Grid Data Management Services. All of our new services are based on the Web Service technology paradigm, very much in line with the emerging Open Grid Services Architecture (OGSA). We have modularized our components and invested a great amount of effort in building a secure, extensible and robust service, starting from the design but also using a streamlined build and testing framework. Our service components are: Replica Location Service, Replica Metadata Service, Replica Optimization Service, Replica Subscription and high-level replica management. The service security infrastructure is fully GSI-enabled, hence compatible with the existing Globus Toolkit 2-based services; moreover, it allows for fine-grained authorization mechanisms that can be adjusted depending on the service semantics.
    Comment: Talk from the 2003 Computing in High Energy and Nuclear Physics conference (CHEP03), La Jolla, CA, USA, March 2003; 8 pages, LaTeX; figures are in the directory "figures"
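
    As an illustration of how a client might use such web-service-based components, here is a purely hypothetical Python sketch of resolving a logical file name (LFN) to a physical replica; the endpoint, paths and JSON fields are invented for illustration and are not the actual EDG service interfaces, which are defined by the project's WSDL:

        import requests  # hypothetical HTTP front end; the real services are SOAP-based

        RLS_ENDPOINT = "https://rls.example.org/replica-location"  # invented endpoint

        def list_replicas(lfn: str) -> list:
            """Ask a (hypothetical) Replica Location Service for all replicas of an LFN."""
            resp = requests.get(f"{RLS_ENDPOINT}/replicas", params={"lfn": lfn}, timeout=30)
            resp.raise_for_status()
            return resp.json()["replicas"]

        def best_replica(lfn: str) -> str:
            """Pick the first replica; a Replica Optimization Service would rank them by access cost."""
            replicas = list_replicas(lfn)
            if not replicas:
                raise LookupError(f"no replicas registered for {lfn}")
            return replicas[0]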

    Resummed event-shape variables in DIS

    We complete our study of resummed event-shape distributions in DIS by presenting results for the class of observables that includes the current jet mass, the C-parameter and the thrust with respect to the current-hemisphere thrust axis. We then compare our results to data for all observables for which data exist, fitting for alpha_s and testing the universality of non-perturbative 1/Q effects. A number of technical issues arise, including the extension of the concept of non-globalness to the case of discontinuous globalness; singularities and non-convergence of distributions other than in the Born limit; methods to speed up fixed-order Monte Carlo programs by up to an order of magnitude, relevant when dealing with many x and Q points; and the estimation of uncertainties on the predictions.
    Comment: 41 pages
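
    For distributions, as opposed to mean values, the same non-perturbative 1/Q effects are commonly implemented as a shift of the perturbative spectrum; schematically, and under the usual assumptions of the power-correction approach rather than anything specific to this paper:

        \frac{d\sigma}{dF}(F) \simeq \frac{d\sigma_{\mathrm{pert}}}{dF}\left( F - a_F\,\mathcal{P} \right)

    with the same observable-dependent coefficient a_F and non-perturbative quantity \mathcal{P} \propto \alpha_0(\mu_I)\,\mu_I/Q as for the means; universality of 1/Q effects means that a single alpha_0(mu_I) should describe all the observables simultaneously.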

    Storage Resource Manager version 2.2: design, implementation, and testing experience

    Storage services are crucial components of the Worldwide LHC Computing Grid (WLCG) infrastructure, spanning more than 200 sites and serving computing and storage resources to the High Energy Physics LHC communities. Up to tens of petabytes of data are collected every year by the four LHC experiments at CERN. To process these large data volumes it is important to establish a protocol and a very efficient interface to the various storage solutions adopted by the WLCG sites. In this work we report on the experience acquired during the definition of the Storage Resource Manager (SRM) v2.2 protocol. In particular, we focus on the study performed to enhance the interface and make it suitable for use by the WLCG communities. At the moment five different storage solutions implement the SRM v2.2 interface: BeStMan (LBNL), CASTOR (CERN and RAL), dCache (DESY and FNAL), DPM (CERN), and StoRM (INFN and ICTP). After a detailed review of the protocol, various test suites have been written, from which the most effective set of tests was identified: the S2 test suite from CERN and the SRM-Tester test suite from LBNL. These test suites have helped verify the consistency and coherence of the proposed protocol and validate the existing implementations. We conclude by describing the results achieved.
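
    The SRM v2.2 interface is asynchronous: a client issues a request such as srmPrepareToGet, polls its status, and only then obtains a transfer URL (TURL). A minimal sketch of that call pattern in Python, where the two function arguments are hypothetical stand-ins for client bindings of the corresponding SOAP operations in the SRM v2.2 WSDL:

        import time

        def stage_file(surl: str, srm_prepare_to_get, srm_status_of_get_request) -> str:
            """Sketch of the asynchronous SRM v2.2 get pattern: request, poll, retrieve TURL."""
            token = srm_prepare_to_get([surl])  # returns a request token
            while True:
                status = srm_status_of_get_request(token)
                if status["state"] == "SRM_SUCCESS":
                    return status["turl"]  # transfer URL, ready for the actual data transfer
                if status["state"] in ("SRM_FAILURE", "SRM_ABORTED"):
                    raise RuntimeError(f"staging of {surl} failed: {status}")
                time.sleep(status.get("retry_after", 5))  # honour the server's suggested delay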

    Event shapes in e+e- annihilation and deep inelastic scattering

    This article reviews the status of event-shape studies in e+e- annihilation and DIS. It includes discussions of perturbative calculations, of various approaches to modelling hadronisation and of comparisons to data.
    Comment: Invited topical review for J. Phys. G; 40 pages; revised version corrects some nomenclature
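
    As a reminder of the kind of observable being reviewed, the classic e+e- thrust is defined (in LaTeX) as

        T = \max_{\vec n} \frac{\sum_i |\vec p_i \cdot \vec n|}{\sum_i |\vec p_i|}

    with T close to 1 for pencil-like two-jet events and smaller values for more isotropic ones; the DIS analogues, as in the measurements above, use the current hemisphere of the Breit frame and either the photon axis or the thrust axis.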

    WLCG Collaboration Workshop (Tier0/Tier1/Tier2)


    Configuration Management


    Managing the CERN Batch System with Kubernetes

    The CERN Batch Service faces many challenges in getting ready for the computing demands of future LHC runs. These challenges require that we look at all potential resources, assess how efficiently we use them, and explore alternatives for exploiting opportunistic resources both within our infrastructure and outside the CERN computing centre. Several projects, such as BEER, the Helix Nebula Science Cloud and the new OCRE project, have proven our ability to run batch workloads on a wide range of non-traditional resources. However, the challenge is not only to obtain the raw compute resources needed, but also to define an operational model that is cost- and time-efficient, scalable, and flexible enough to adapt to a heterogeneous infrastructure. To tackle both the provisioning and the operational challenges, we decided to use Kubernetes. With Kubernetes we benefit from a de facto standard in containerised environments, available in nearly all cloud providers and surrounded by a vibrant ecosystem of open-source projects. Leveraging Kubernetes’ built-in functionality, and other open-source tools such as Helm, Terraform and GitLab CI, we have deployed a first cluster prototype, which we discuss in detail. The effort has simplified many of our existing operational procedures, but has also made us rethink established procedures and assumptions that were only valid in a VM-based cloud environment. This contribution presents how we have adopted Kubernetes into the CERN Batch Service, the impact its adoption has had on daily operations, a comparison of resource-usage efficiency, and our experience so far in evolving our infrastructure towards this model.
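
    As an illustration of the provisioning side, here is a minimal sketch using the official Kubernetes Python client to roll out a set of batch worker pods as a Deployment; the namespace, image name and resource requests are invented placeholders rather than the actual CERN configuration, which the contribution manages with Helm, Terraform and GitLab CI:

        from kubernetes import client, config

        def deploy_batch_workers(replicas: int = 10) -> None:
            """Create a Deployment of (hypothetical) batch worker pods."""
            config.load_kube_config()  # or config.load_incluster_config() when running inside the cluster

            container = client.V1Container(
                name="batch-worker",
                image="registry.example.org/batch/worker:latest",  # placeholder image
                resources=client.V1ResourceRequirements(requests={"cpu": "4", "memory": "8Gi"}),
            )
            template = client.V1PodTemplateSpec(
                metadata=client.V1ObjectMeta(labels={"app": "batch-worker"}),
                spec=client.V1PodSpec(containers=[container]),
            )
            spec = client.V1DeploymentSpec(
                replicas=replicas,
                selector=client.V1LabelSelector(match_labels={"app": "batch-worker"}),
                template=template,
            )
            deployment = client.V1Deployment(
                api_version="apps/v1",
                kind="Deployment",
                metadata=client.V1ObjectMeta(name="batch-workers"),
                spec=spec,
            )
            client.AppsV1Api().create_namespaced_deployment(namespace="batch", body=deployment)

    Scaling the pool up or down then reduces to changing the replica count (or letting an autoscaler do so), which is the kind of elasticity a heterogeneous, partly opportunistic infrastructure calls for.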